Results 1 - 20 of 142
1.
Article in English | MEDLINE | ID: mdl-38625778

ABSTRACT

Many research works have shown that the traditional alternating direction method of multipliers (ADMM) can be better understood through continuous-time differential equations (DEs). On the other hand, many unfolded algorithms directly inherit the traditional iterations to build deep networks. Although they achieve superior practical performance and faster convergence than their traditional counterparts, there is a lack of clear insight into unfolded network structures. Thus, we explore the unfolded linearized ADMM (LADMM) from the perspective of DEs and design more efficient unfolded networks. First, we propose an unfolded Euler LADMM scheme and, inspired by the trapezoid discretization, design a new, more accurate Trapezoid LADMM scheme. For ease of implementation, we provide its explicit version via a prediction-correction strategy. Then, to expand the representation space of unfolded networks, we design an accelerated variant of our Euler LADMM scheme, which can be interpreted as a second-order DE with stronger representation capabilities. To fully explore this representation space, we design an accelerated Trapezoid LADMM scheme. To the best of our knowledge, this is the first work to establish a comprehensive connection, with theoretical guarantees, between unfolded ADMMs and first- (second-)order DEs. Finally, we instantiate our schemes as (A-)ELADMM and (A-)TLADMM with proximal operators, and as (A-)ELADMM-Net and (A-)TLADMM-Net with convolutional neural networks (CNNs). Extensive inverse problem experiments show that our Trapezoid LADMM schemes outperform well-known methods.
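As a toy illustration of the two discretizations named above, the sketch below contrasts a forward-Euler step with an explicit trapezoid step obtained via a prediction-correction strategy (Heun's method) on the simple gradient flow dx/dt = -x; this is a generic ODE example, not the authors' unfolded network.

```python
import numpy as np

def f(x):
    # Example dynamics: gradient flow of g(x) = 0.5 * x^2, i.e. dx/dt = -x.
    return -x

def euler_step(x, h):
    # Forward Euler: x_{k+1} = x_k + h * f(x_k)  (first-order accurate).
    return x + h * f(x)

def trapezoid_step(x, h):
    # Explicit trapezoid rule via prediction-correction (Heun's method):
    # predict with an Euler step, then average the two slopes
    # (second-order accurate).
    x_pred = x + h * f(x)
    return x + 0.5 * h * (f(x) + f(x_pred))

h = 0.1
x_euler = x_trap = 1.0
for _ in range(50):
    x_euler = euler_step(x_euler, h)
    x_trap = trapezoid_step(x_trap, h)

exact = np.exp(-50 * h)  # exact solution of dx/dt = -x at t = 5
# The trapezoid (prediction-correction) step tracks the exact flow more closely.
assert abs(x_trap - exact) < abs(x_euler - exact)
```

The same accuracy gap between the two integrators is what motivates preferring a trapezoid-style unfolding over a plain Euler one.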

2.
NPJ Digit Med ; 7(1): 97, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622284

ABSTRACT

Meniscal injury is a common type of knee injury, accounting for over 50% of all knee injuries. The clinical diagnosis and treatment of meniscal injury rely heavily on magnetic resonance imaging (MRI). However, accurately diagnosing the meniscus from a comprehensive knee MRI is challenging due to its limited and weak signal, significantly impeding the precise grading of meniscal injuries. In this study, a visual interpretable fine grading (VIFG) diagnosis model was developed to facilitate intelligent, quantified grading of meniscal injuries. Leveraging a multilevel transfer learning framework, it extracts comprehensive features and incorporates an attributional attention module to precisely locate the injured positions. Moreover, an attention-enhancing feedback module effectively concentrates on and distinguishes regions with similar grades of injury. The proposed method was validated on the FastMRI_Knee and Xijing_Knee datasets, achieving mean grading accuracies of 0.8631 and 0.8502, notably surpassing state-of-the-art grading methods in the error-prone Grade 1 and Grade 2 cases. Additionally, the visually interpretable heatmaps generated by VIFG provide accurate depictions of actual or potential meniscus injury areas beyond human visual capability. Building upon this, a novel fine grading criterion was introduced for subtypes of meniscal injury, further classifying Grade 2 into 2a, 2b, and 2c in line with the anatomical knowledge of meniscal blood supply. It provides enhanced injury-specific details, facilitating the development of more precise surgical strategies. The efficacy of this subtype classification was evidenced in 20 arthroscopic cases, underscoring the potential of intelligent-assisted diagnosis and treatment for meniscal injuries.

3.
Article in English | MEDLINE | ID: mdl-38652624

ABSTRACT

Recently, the multiscale problem in computer vision has gradually attracted attention. This article focuses on multiscale representation for object detection and recognition, comprehensively introduces the development of multiscale deep learning, and constructs an easy-to-understand but powerful knowledge structure. First, we give the definition of scale, explain the multiscale mechanism of human vision, and then turn to the multiscale problem discussed in computer vision. Second, advanced multiscale representation methods are introduced, including pyramid representation, scale-space representation, and multiscale geometric representation. Third, the theory of multiscale deep learning is presented, which mainly discusses multiscale modeling in convolutional neural networks (CNNs) and Vision Transformers (ViTs). Fourth, we compare the performance of multiple multiscale methods on different tasks, illustrating the effectiveness of different multiscale structural designs. Finally, based on an in-depth understanding of the existing methods, we point out several open issues and future directions for multiscale deep learning.
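As a minimal illustration of the pyramid representation mentioned above, the following sketch builds a Gaussian-style image pyramid by repeated blur-and-downsample; the 3x3 binomial filter is an arbitrary choice for the example, not tied to any specific method in the survey.

```python
import numpy as np

def blur3x3(img):
    # Separable 3x3 binomial blur with kernel [1, 2, 1] / 4 and edge padding.
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    p = np.pad(img, 1, mode="edge")
    # Vertical pass.
    rows = k[0] * p[:-2, 1:-1] + k[1] * p[1:-1, 1:-1] + k[2] * p[2:, 1:-1]
    # Horizontal pass.
    q = np.pad(rows, ((0, 0), (1, 1)), mode="edge")
    return k[0] * q[:, :-2] + k[1] * q[:, 1:-1] + k[2] * q[:, 2:]

def gaussian_pyramid(img, levels):
    # Each level: blur, then drop every other row/column (downsample by 2).
    pyr = [img]
    for _ in range(levels - 1):
        img = blur3x3(img)[::2, ::2]
        pyr.append(img)
    return pyr

img = np.random.rand(64, 64)
pyr = gaussian_pyramid(img, 4)  # resolutions 64, 32, 16, 8
```

Each coarser level suppresses fine detail, so detectors run on the pyramid see the same scene at multiple scales.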

4.
Micromachines (Basel) ; 15(3)2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38542658

ABSTRACT

This paper presents a machine-learning-based figure-of-merit model for a superjunction (SJ) U-MOSFET (SSJ-UMOS) with a drift region modulated by semi-insulating polycrystalline silicon (SIPOS) pillars. The SJ drift region modulation is achieved through SIPOS pillars beneath the trench gate, focusing on optimizing the tradeoff between breakdown voltage (BV) and specific ON-resistance (R_ON,sp). The analytical model considers the effects of electric field modulation, charge coupling, and majority carrier accumulation due to the additional SIPOS pillars. Gaussian process regression is employed for figure-of-merit (FOM = BV^2/R_ON,sp) prediction and hyperparameter optimization, ensuring a reasonable and accurate model. A methodology is devised to determine the optimal BV-R_ON,sp tradeoff, surpassing the SJ silicon limit. The paper also discusses the optimal structural parameters for the drift region, oxide thickness, and electric field modulation coefficients within the analytical model. The validity of the proposed model is robustly confirmed through comprehensive verification against TCAD simulation results.
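Gaussian process regression of the kind described above can be sketched in a few lines of numpy; the training points below are hypothetical stand-ins for (normalized design parameter, FOM) pairs, not the paper's TCAD data, and the kernel hyperparameters are arbitrary.

```python
import numpy as np

def rbf_kernel(A, B, length=0.2, amp=1.0):
    # Squared-exponential kernel k(a, b) = amp * exp(-||a - b||^2 / (2 * length^2)).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return amp * np.exp(-d2 / (2.0 * length**2))

def gpr_predict(X, y, Xs, noise=1e-6):
    # Standard GP posterior mean and variance at test inputs Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Hypothetical training data: x is a normalized pillar/drift design parameter,
# y stands in for FOM = BV^2 / R_ON,sp (arbitrary units) -- NOT real device data.
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0]) + 3.0
mean, var = gpr_predict(X, y, X)
```

At the training inputs the posterior mean reproduces the observations and the posterior variance collapses, which is what makes the surrogate usable for tradeoff optimization between samples.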

5.
Article in English | MEDLINE | ID: mdl-38408011

ABSTRACT

The Transformer-convolutional neural network (CNN) hybrid learning approach is gaining traction for balancing deep and shallow image features in hierarchical semantic segmentation. However, such methods are still confronted with a contradiction between comprehensive semantic understanding and meticulous detail extraction. To solve this problem, this article proposes a novel Transformer-CNN hybrid hierarchical network, dubbed contourlet transformer (CoT). In the CoT framework, the semantic representation process of the Transformer is unavoidably peppered with sparsely distributed points that, while not desired, demand finer detail. Therefore, we design a deep detail representation (DDR) structure to investigate their fine-grained features. First, through the contourlet transform (CT), we distill the high-frequency directional components from the raw image, yielding localized features that accommodate the inductive bias of CNNs. Second, a CNN deep sparse learning (DSL) module takes them as input to represent the underlying detailed features. This memory- and energy-efficient learning method keeps the same sparse pattern between input and output. Finally, the decoder hierarchically fuses the detailed features with the semantic features in an image reconstruction-like fashion. Experiments demonstrate that CoT achieves competitive performance on three benchmark datasets: PASCAL Context (57.21% mean intersection over union (mIoU)), ADE20K (54.16% mIoU), and Cityscapes (84.23% mIoU). Furthermore, we conducted robustness studies to validate its resistance against various sorts of corruption. Our code is available at: https://github.com/yilinshao/CoT-Contourlet-Transformer.

6.
Neural Netw ; 168: 471-483, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37806140

ABSTRACT

A quantum neural network (QNN) is a neural network model based on the principles of quantum mechanics. Its advantages of faster computing speed, higher memory capacity, smaller network size, and elimination of catastrophic forgetting make it a new approach to training on massive data that is difficult for classical neural networks. However, the quantum circuits of QNNs are manually designed, with high circuit complexity and low precision in classification tasks. In this paper, a neural architecture search method, EQNAS, is proposed to improve QNNs. First, the quantum population is initialized after quantum encoding of the images. Next, the quantum population is observed and its fitness is evaluated. Finally, the quantum population is updated through quantum rotation gate updates, quantum circuit construction, and entirety interference crossover; the last two steps are iterated until a satisfactory fitness is achieved. Extensive experiments on the searched quantum neural networks prove the feasibility and effectiveness of the proposed algorithm, and the searched QNN clearly outperforms the original algorithm: classification accuracy on the MNIST dataset and a warship dataset increased by 5.31% and 4.52%, respectively, while the parameters were reduced by 21.88% and 31.25%, respectively. Code will be available at https://gitee.com/Pcyslist/models/tree/master/research/cv/EQNAS and https://github.com/Pcyslist/EQNAS.


Subjects
Algorithms; Neural Networks, Computer; Rotation; Biological Evolution
7.
Article in English | MEDLINE | ID: mdl-37819820

ABSTRACT

Deep neural networks (DNNs) play key roles in various artificial intelligence applications such as image classification and object recognition. However, a growing number of studies have shown that there exist adversarial examples for DNNs, which are almost imperceptibly different from the original samples but can greatly change the output of the network. Recently, many white-box attack algorithms have been proposed, and most of them concentrate on how to make the best use of gradients per iteration to improve adversarial performance. In this article, we focus on the properties of the widely used activation function, the rectified linear unit (ReLU), and find that there exist two phenomena (i.e., wrong blocking and over transmission) misguiding the calculation of gradients for ReLU during backpropagation. Both issues enlarge the difference between the predicted changes of the loss function from gradients and the corresponding actual changes, and misguide the optimization direction, which results in larger perturbations. Therefore, we propose a universal gradient correction adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms such as the fast gradient sign method (FGSM), iterative FGSM (I-FGSM), momentum I-FGSM (MI-FGSM), and variance tuning MI-FGSM (VMI-FGSM). Through backpropagation, our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a part of them to update the misguided gradients. Comprehensive experimental results on ImageNet and CIFAR10 demonstrate that our ADV-ReLU can be easily integrated into many state-of-the-art gradient-based white-box attack algorithms, as well as transferred to black-box attacks, to further decrease perturbations measured in the l2-norm.
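For context, the baseline attack that ADV-ReLU enhances, FGSM, can be sketched on a toy logistic model; the model and data here are hypothetical, and this shows only the vanilla gradient-sign step, not the authors' gradient correction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(x, w, y):
    # Binary cross-entropy of a logistic model p = sigmoid(w . x);
    # returns the loss and its gradient with respect to the INPUT x.
    p = sigmoid(w @ x)
    loss = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    grad_x = (p - y) * w  # dL/dx for the logistic model
    return loss, grad_x

def fgsm(x, w, y, eps):
    # Fast gradient sign method: one step of size eps along sign(dL/dx).
    _, g = loss_and_grad(x, w, y)
    return x + eps * np.sign(g)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.normal(size=8)
y = 1.0
clean_loss, _ = loss_and_grad(x, w, y)
adv_loss, _ = loss_and_grad(fgsm(x, w, y, 0.1), w, y)
assert adv_loss > clean_loss  # the sign-step perturbation increases the loss
```

For this linear model the sign step provably increases the loss; ADV-ReLU targets deep nets, where ReLU makes the gradient a less faithful predictor of the actual loss change.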

8.
Article in English | MEDLINE | ID: mdl-37672374

ABSTRACT

In recent years, object localization and detection methods for remote sensing images (RSIs) have received increasing attention due to their broad applications. However, most previous fully supervised methods require a large number of time-consuming and labor-intensive instance-level annotations. Compared with fully supervised methods, weakly supervised object localization (WSOL) aims to recognize object instances using only image-level labels, which greatly reduces the labeling cost for RSIs. In this article, we propose a self-directed weakly supervised strategy (SD-WSS) to perform WSOL in RSIs. Specifically, we fully exploit and enhance the spatial feature extraction capability of the RSI classification model to accurately localize the objects of interest. To alleviate the serious discriminative region problem exhibited by previous WSOL methods, the spatial location information implicit in the classification model is carefully extracted by Grad-CAM++ to guide the learning procedure. Furthermore, to eliminate the interference from the complex backgrounds of RSIs, we design a novel self-directed loss to make the model optimize itself and explicitly tell it where to look. Finally, we review and annotate the existing remote sensing scene classification dataset and create two new WSOL benchmarks for RSIs, named C45V2 and PN2. We conduct extensive experiments to evaluate the proposed method and six mainstream WSOL methods with three backbones on C45V2 and PN2. The results demonstrate that our proposed method achieves better performance than the state of the art.

9.
Article in English | MEDLINE | ID: mdl-37747859

ABSTRACT

Modeling contextual relationships in images as graph inference is an interesting and promising research topic. However, existing approaches only perform graph modeling of entities, ignoring the intrinsic geometric features of images. To overcome this problem, a novel multiresolution interpretable contourlet graph network (MICGNet) is proposed in this article. MICGNet delicately balances graph representation learning with the multiscale and multidirectional features of images, where the contourlet is used to capture the hyperplanar directional singularities of images and multilevel sparse contourlet coefficients are encoded into a graph for further graph representation learning. This process provides interpretable theoretical support for optimizing the model structure. Specifically, first, the superpixel-based region graph is constructed. Then, the region graph is applied to code the nonsubsampled contourlet transform (NSCT) coefficients of the image, which are taken as node features. Considering the statistical properties of the NSCT coefficients, we calculate the node similarity, i.e., the adjacency matrix, using the Mahalanobis distance. Next, graph convolutional networks (GCNs) are employed to further learn more abstract multilevel NSCT-enhanced graph representations. Finally, a learnable graph assignment matrix is designed to obtain the geometric association representations, which accomplishes the assignment of graph representations to grid feature maps. We conduct comparative experiments on six publicly available datasets, and the experimental analysis shows that MICGNet is significantly more effective and efficient than other recent algorithms.
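The Mahalanobis-distance adjacency described above can be sketched as follows; the Gaussian kernel used to turn distances into similarities is an assumption for the example, and random vectors stand in for per-region NSCT coefficient statistics.

```python
import numpy as np

def mahalanobis_adjacency(feats, sigma=1.0):
    # feats: (N, D) node features (here, stand-ins for per-region NSCT
    # coefficient statistics). The Mahalanobis distance whitens by the
    # inverse feature covariance; a Gaussian kernel then turns squared
    # distances into similarity (adjacency) weights.
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    inv_cov = np.linalg.inv(cov)
    diff = feats[:, None, :] - feats[None, :, :]          # (N, N, D)
    d2 = np.einsum("ijd,de,ije->ij", diff, inv_cov, diff)  # squared distances
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 4))  # 6 hypothetical region nodes, 4-D features
A = mahalanobis_adjacency(feats)
```

The resulting matrix is symmetric with unit diagonal, as an adjacency built from a distance kernel should be.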

10.
Article in English | MEDLINE | ID: mdl-37610895

ABSTRACT

The emergence of neural architecture search (NAS) algorithms has removed the constraints of manually designed neural network architectures, so that neural network development no longer requires extensive professional knowledge and trial and error. However, extremely high computational cost limits the development of NAS algorithms. In this article, in order to reduce computational cost, we investigate how to improve the efficiency and effectiveness of evolutionary NAS (ENAS). We present a fast ENAS framework for multiscale convolutional networks based on evolutionary knowledge transfer search (EKTS). The framework is novel in that it combines global and local optimization methods for search and searches a multiscale network architecture. Evolutionary computation is used as a global optimization algorithm with high robustness and wide applicability for searching neural architectures. At the same time, for fast search, we combine knowledge transfer and local fast learning to improve the search speed. In addition, we explore a multiscale gray-box structure, which combines the Bandelet transform with convolution to improve network approximation, learning, and generalization. Finally, we compare the searched architectures with more than 40 different neural architectures, and the results confirm the effectiveness of our method.

11.
Article in English | MEDLINE | ID: mdl-37603473

ABSTRACT

Recently, the excellent performance of the Transformer has attracted the attention of the visual community. Visual Transformer models usually reshape images into sequence format and encode them sequentially. However, it is difficult to explicitly represent the relative relationships in distance and direction of visual data, which have typical 2-D spatial structures. Also, the temporal motion properties of consecutive frames are hardly exploited in dynamic video tasks such as tracking. Therefore, we propose a novel dynamic polar spatio-temporal encoding for video scenes. We use spiral functions in polar space to fully exploit the spatial dependences of distance and direction in real scenes. We then design a dynamic relative encoding mode for continuous frames to capture the continuous spatio-temporal motion characteristics among video frames. Finally, we construct a complex-former framework with the proposed encoding applied to video-tracking tasks, where the complex fusion mode (CFM) realizes the effective fusion of scenes and positions for consecutive frames. Theoretical analysis demonstrates the feasibility and effectiveness of our proposed method, and experimental results on multiple datasets validate that our method can improve tracker performance in various video scenarios.

12.
Article in English | MEDLINE | ID: mdl-37440377

ABSTRACT

Accurately extracting buildings from aerial images is of essential research significance for timely understanding human intervention on the land. The distribution discrepancies between diversified unlabeled remote sensing images (changes in imaging sensor, location, and environment) and labeled historical images significantly degrade the generalization performance of deep learning algorithms. Unsupervised domain adaptation (UDA) algorithms have recently been proposed to eliminate these distribution discrepancies without re-annotating training data for new domains. Nevertheless, due to the limited information provided by a single source domain, single-source UDA (SSUDA) is not an optimal choice when multitemporal and multiregion remote sensing images are available. We propose a multisource UDA (MSUDA) framework, SPENet, for building extraction, aiming at selecting, purifying, and exchanging information from multisource domains to better adapt the model to the target domain. Specifically, the framework effectively utilizes richer knowledge by extracting target-relevant information from multiple source domains, purifying target domain information with low-level features of buildings, and exchanging target domain information in an interactive learning manner. Extensive experiments and ablation studies conducted on 12 city datasets prove the effectiveness of our method against existing state-of-the-art methods; e.g., our method achieves 59.1% intersection over union (IoU) on Austin and Kitsap → Potsdam, which surpasses the target-domain supervised method by 2.2%. The code is available at https://github.com/QZangXDU/SPENet.
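The intersection-over-union metric reported above is computed as follows for binary building masks (this is the standard definition, independent of SPENet):

```python
import numpy as np

def iou(pred, target):
    # Intersection over union for binary masks:
    # IoU = |pred AND target| / |pred OR target|.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

pred = np.zeros((4, 4), dtype=int)
pred[:2, :] = 1       # predicted building mask: 8 pixels
target = np.zeros((4, 4), dtype=int)
target[:3, :] = 1     # ground-truth mask: 12 pixels
score = iou(pred, target)  # intersection 8, union 12 -> 2/3
```

Per-image IoU values are then averaged over the test set to give the percentage figures quoted in the abstract.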

13.
Article in English | MEDLINE | ID: mdl-37463078

ABSTRACT

Feature extraction is a key step in deep-learning-based point cloud registration. In the correspondence-free point cloud registration task, previous work commonly aggregates deep information for global feature extraction, while much of the shallow information that benefits point cloud registration is ignored as the neural network deepens. Shallow information tends to represent the structural information of the point cloud, while deep information tends to represent its semantic information. In addition, fusing information of different dimensions is conducive to making full use of shallow information. Inspired by this, we verify that shallow information from the middle layers can have a positive impact on the point cloud registration task. We design various architectures that combine shallow and deep information to extract global features for point cloud registration. Experimental results on the ModelNet40 dataset illustrate that feature extractors incorporating shallow information bring performance gains.

14.
Article in English | MEDLINE | ID: mdl-37436859

ABSTRACT

Most existing methods that cope with noisy labels usually assume that the classwise data distributions are well balanced. They struggle with practical scenarios in which training samples have imbalanced distributions, since they are not able to differentiate noisy samples from clean samples of tail classes. This article makes an early effort to tackle the image classification task in which the provided labels are noisy and have a long-tailed distribution. To deal with this problem, we propose a new learning paradigm that can screen out noisy samples by matching inferences on weak and strong data augmentations. A leave-noise-out regularization (LNOR) is further introduced to eliminate the effect of the recognized noisy samples. Besides, we propose a prediction penalty based on online classwise confidence levels to avoid the bias toward easy classes, which are dominated by head classes. Extensive experiments on five datasets, including CIFAR-10, CIFAR-100, MNIST, FashionMNIST, and Clothing1M, demonstrate that the proposed method outperforms existing algorithms for learning with a long-tailed distribution and label noise.
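One plausible reading of "screening noisy samples by matching inferences on weak and strong augmentations" is sketched below; the agreement rule here is a toy stand-in for illustration, not the paper's exact criterion.

```python
import numpy as np

def screen_noisy(probs_weak, probs_strong, labels):
    # A sample is kept as (likely) clean when the predicted classes under
    # the weak and the strong augmentation agree with each other AND with
    # the provided label; otherwise it is flagged as noisy. This is a toy
    # agreement rule, not the published LNOR criterion.
    pred_weak = probs_weak.argmax(axis=1)
    pred_strong = probs_strong.argmax(axis=1)
    return (pred_weak == pred_strong) & (pred_weak == labels)

# Hypothetical 2-class softmax outputs for 3 samples.
probs_weak = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
probs_strong = np.array([[0.8, 0.2], [0.7, 0.3], [0.55, 0.45]])
labels = np.array([0, 1, 1])
clean = screen_noisy(probs_weak, probs_strong, labels)
```

Sample 0 agrees everywhere and is kept; sample 1's two views disagree; sample 2's consistent prediction contradicts its label, so both are flagged as noisy.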

15.
Article in English | MEDLINE | ID: mdl-37279122

ABSTRACT

Domain generalization (DG) is one of the critical issues for deep learning in unknown domains. How to effectively represent domain-invariant context (DIC) is a difficult problem that DG needs to solve. Transformers have shown the potential to learn generalized features, owing to their powerful ability to learn global context. In this article, a novel method named patch diversity Transformer (PDTrans) is proposed to improve DG for scene segmentation by learning global multidomain semantic relations. Specifically, patch photometric perturbation (PPP) is proposed to improve the representation of multiple domains in the global context, which helps the Transformer learn the relationships among domains. Besides, patch statistics perturbation (PSP) is proposed to model the feature statistics of patches under different domain shifts, which enables the model to encode domain-invariant semantic features and improve generalization. PPP and PSP help diversify the source domain at the patch level and the feature level. PDTrans learns context across diverse patches and takes advantage of self-attention to improve DG. Extensive experiments demonstrate the substantial performance advantages of PDTrans over state-of-the-art DG methods.
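In spirit, a patch-level photometric perturbation could look like the following sketch, which jitters brightness and contrast independently per patch; the actual PPP transform used by PDTrans may differ.

```python
import numpy as np

def patch_photometric_perturbation(img, patch=8, scale=0.2, seed=0):
    # Split the image into non-overlapping patches and apply an independent
    # random brightness/contrast jitter to each patch. A toy version of
    # patch-level photometric perturbation, not the published PPP module.
    rng = np.random.default_rng(seed)
    out = img.copy()
    H, W = img.shape[:2]
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            a = 1.0 + scale * rng.uniform(-1, 1)  # contrast factor
            b = scale * rng.uniform(-1, 1)        # brightness offset
            out[i:i + patch, j:j + patch] = a * img[i:i + patch, j:j + patch] + b
    return np.clip(out, 0.0, 1.0)

img = np.full((32, 32), 0.5)  # hypothetical grayscale image in [0, 1]
aug = patch_photometric_perturbation(img)
```

Because each patch draws its own jitter, a single source image yields patches that look like they came from several photometric "domains".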

16.
Neural Netw ; 164: 345-356, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37163850

ABSTRACT

Knowledge distillation (KD) has been widely used in model compression. However, in current multi-teacher KD algorithms, the student can only passively acquire the knowledge of the teachers' middle layers in a single form, and all teachers use an identical guiding scheme for the student. To solve these problems, this paper proposes a multi-teacher KD method based on joint Guidance of Probe and Adaptive Corrector (GPAC). First, GPAC proposes a teacher selection strategy guided by the linear classifier probe (LCP), which allows the student to select better teachers at the middle layer; teachers are evaluated using the classification accuracy detected by the LCP. Then, GPAC designs an adaptive multi-teacher instruction mechanism that uses instructional weights to emphasize the student's predicted direction and reduce the student's difficulty in learning from teachers. At the same time, every teacher can formulate a guiding scheme according to the Kullback-Leibler divergence loss between the student and itself. Finally, GPAC develops a multi-level mechanism for adjusting the spatial attention loss, using a piecewise function that varies with the number of epochs. This piecewise function classifies the student's learning of spatial attention into three levels, which efficiently exploits the teachers' spatial attention. GPAC and current state-of-the-art distillation methods are tested on the CIFAR-10 and CIFAR-100 datasets. The experimental results demonstrate that the proposed method obtains higher classification accuracy.
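A toy version of weighting teachers by their KL divergence to the student, in the spirit of the per-teacher guiding scheme above, might look like this; the inverse-KL weighting and all numbers are assumptions for illustration, not GPAC's published formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q):
    # KL(p || q) = sum p * log(p / q), averaged over the batch.
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

def teacher_weights(teacher_logits, student_logits, T=4.0):
    # Weight each teacher inversely to its KL divergence from the student:
    # teachers whose soft predictions are closer to the student's guide more
    # strongly. A hypothetical weighting rule, not GPAC's exact formula.
    s = softmax(student_logits, T)
    kls = np.array([kl_div(softmax(t, T), s) for t in teacher_logits])
    w = 1.0 / (kls + 1e-8)
    return w / w.sum()

rng = np.random.default_rng(2)
student = rng.normal(size=(16, 10))              # batch of student logits
teachers = [student + 0.1 * rng.normal(size=(16, 10)),  # close to the student
            rng.normal(size=(16, 10))]                  # unrelated teacher
w = teacher_weights(teachers, student)
```

The teacher whose outputs nearly match the student's receives the larger weight, illustrating how a per-teacher KL signal can modulate guidance.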


Subjects
Algorithms; Data Compression; Humans; Knowledge; Students
17.
IEEE Trans Med Imaging ; 42(11): 3229-3243, 2023 11.
Article in English | MEDLINE | ID: mdl-37216246

ABSTRACT

The convolutional neural network has achieved remarkable results in most medical image segmentation applications. However, the intrinsic locality of the convolution operation limits its ability to model long-range dependencies. Although the Transformer, designed for sequence-to-sequence global prediction, was born to solve this problem, it may lead to limited positioning capability due to insufficient low-level detail features. Moreover, low-level features carry rich fine-grained information, which greatly impacts edge segmentation decisions for different organs. However, a simple CNN module struggles to capture the edge information in fine-grained features, and the computation and memory consumed in processing high-resolution 3D features are costly. This paper proposes an encoder-decoder network, called EPT-Net, that effectively combines edge perception with a Transformer structure to segment medical images accurately. Under this framework, we propose a Dual Position Transformer to effectively enhance the 3D spatial positioning ability. In addition, as low-level features contain detailed information, we introduce an Edge Weight Guidance module that extracts edge information by minimizing an edge information function without adding network parameters. Furthermore, we verified the effectiveness of the proposed method on three datasets: SegTHOR 2019, Multi-Atlas Labeling Beyond the Cranial Vault, and KiTS19-M, our re-labeled version of the KiTS19 dataset. The experimental results show that EPT-Net improves significantly over state-of-the-art medical image segmentation methods.


Assuntos
Redes Neurais de Computação , Crânio , Percepção , Processamento de Imagem Assistida por Computador
18.
Article in English | MEDLINE | ID: mdl-37021901

ABSTRACT

The task of hyperspectral image (HSI) classification has attracted extensive attention. The rich spectral information in HSIs not only provides more detailed information but also brings a lot of redundancy. Redundant information makes the spectral curves of different categories follow similar trends, which leads to poor category separability. In this article, we improve category separability by increasing the difference between categories and reducing the variation within each category, thus improving classification accuracy. Specifically, we propose a template-spectrum-based processing module from the spectral perspective, which can effectively expose the unique characteristics of different categories and reduce the difficulty of mining key features. Second, we design an adaptive dual attention network from the spatial perspective, in which the target pixel adaptively aggregates high-level features by evaluating the confidence of effective information in different receptive fields. Compared with the single adjacency scheme, the adaptive dual attention mechanism makes the target pixel's ability to combine spatial information to reduce variation more stable. Finally, we design a dispersion loss from the classifier's perspective. By supervising the learnable parameters of the final classification layer, the loss makes the category standard eigenvectors learned by the model more dispersed, which improves category separability and reduces the misclassification rate. Experiments on three common datasets show that our proposed method is superior to the comparison methods.
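A dispersion loss over the final classification layer could be sketched as the mean pairwise cosine similarity between class weight vectors, as below; this is one plausible reading of such a loss, not necessarily the authors' exact definition.

```python
import numpy as np

def dispersion_loss(W):
    # W: (C, D) classifier weight vectors, one per category. The loss is the
    # mean pairwise cosine similarity between different categories' vectors;
    # minimizing it pushes the "category standard eigenvectors" apart.
    # A hypothetical formulation for illustration.
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = Wn @ Wn.T
    C = W.shape[0]
    off_diag = cos[~np.eye(C, dtype=bool)]
    return off_diag.mean()

tight = np.array([[1.0, 0.01], [1.0, -0.01], [1.0, 0.02]])  # nearly parallel
spread = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])    # well separated
loss_tight = dispersion_loss(tight)
loss_spread = dispersion_loss(spread)
```

Well-separated class vectors yield a much lower loss than nearly parallel ones, so gradient descent on this term disperses the classifier weights.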

19.
Article in English | MEDLINE | ID: mdl-37021983

ABSTRACT

The scene classification of remote sensing (RS) images plays an essential role in the RS community, aiming to assign semantics to different RS scenes. With the increase in the spatial resolution of RS images, high-resolution RS (HRRS) image scene classification becomes a challenging task, because the contents of HRRS images are diverse in type, various in scale, and massive in volume. Recently, deep convolutional neural networks (DCNNs) have provided promising results for HRRS scene classification. Most of them regard HRRS scene classification as a single-label problem, in which the semantics represented by the manual annotation directly decide the final classification results. Although this is feasible, the various semantics hidden in HRRS images are ignored, resulting in inaccurate decisions. To overcome this limitation, we propose a semantic-aware graph network (SAGN) for HRRS images. SAGN consists of a dense feature pyramid network (DFPN), an adaptive semantic analysis module (ASAM), a dynamic graph feature update module, and a scene decision module (SDM), whose functions are to extract multi-scale information, mine the various semantics, exploit the unstructured relations between diverse semantics, and make the decision for HRRS scenes, respectively. Instead of transforming single-label problems into multi-label ones, our SAGN elaborates proper methods to make full use of the diverse semantics hidden in HRRS images to accomplish the scene classification task. Extensive experiments conducted on three popular HRRS scene datasets show the effectiveness of the proposed SAGN.

20.
Article in English | MEDLINE | ID: mdl-37021990

ABSTRACT

For complex data, high dimensionality and high noise are challenging problems, and deep matrix factorization shows great potential for data dimensionality reduction. In this article, a novel robust and effective deep matrix factorization framework is proposed. The method constructs a dual-angle feature for single-modal gene data to improve effectiveness and robustness, which can solve the problem of high-dimensional tumor classification. The proposed framework consists of three parts: deep matrix factorization, double-angle decomposition, and feature purification. First, a robust deep matrix factorization (RDMF) model is proposed for feature learning, to enhance classification stability and obtain better features from noisy data. Second, a double-angle feature (RDMF-DA) is designed by cascading the RDMF features with sparse features, which contains more comprehensive information from the gene data. Third, to avoid the influence of redundant genes on representation ability, a gene selection method is proposed to purify the features of RDMF-DA, based on the principles of sparse representation (SR) and gene coexpression. Finally, the proposed algorithm is applied to gene expression profiling datasets, and its performance is fully verified.
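A generic two-level ("deep") matrix factorization X ≈ W1 W2 H can be sketched with alternating least squares, as below; this illustrates only the layered decomposition structure, not the robust RDMF model or its sparsity terms, and the data are synthetic.

```python
import numpy as np

def deep_mf(X, d1, d2, iters=5, seed=0):
    # Two-level factorization X ≈ W1 @ W2 @ H, fitted by alternating least
    # squares: each factor is updated to the exact least-squares optimum
    # with the others fixed (via pseudo-inverses). A generic sketch of deep
    # matrix factorization, not the robust RDMF model from the abstract.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W2 = rng.normal(size=(d1, d2))
    H = rng.normal(size=(d2, m))
    for _ in range(iters):
        W1 = X @ np.linalg.pinv(W2 @ H)
        W2 = np.linalg.pinv(W1) @ X @ np.linalg.pinv(H)
        H = np.linalg.pinv(W1 @ W2) @ X
    return W1, W2, H

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 4)) @ rng.normal(size=(4, 30))  # synthetic rank-4 data
W1, W2, H = deep_mf(X, d1=8, d2=4)
err = np.linalg.norm(W1 @ W2 @ H - X) / np.linalg.norm(X)
```

Because the data have exact rank 4 and the innermost factor also has width 4, the alternating updates recover X essentially exactly; real gene data would leave a residual, which robust variants like RDMF are designed to handle.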
